Sentence processing in spiking neurons: A biologically plausible left-corner parser
Authors
Abstract
A long-standing challenge in cognitive science is how neurons could be capable of the flexible structured processing that is the hallmark of cognition. We present a spiking neural model that takes an input sequence of words (a sentence) and produces a structured, tree-like representation indicating the parts of speech it has identified and their relations to each other. While this system is based on a standard left-corner parser for constituency grammars, the neural nature of the model leads to new capabilities not seen in classical implementations. For example, the model degrades gracefully in performance as the sentence structure grows larger. Unlike previous attempts at building neural parsing systems, this model is highly robust to neural damage, can be applied to any binary constituency grammar, and requires relatively few neurons (~150,000).
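The control strategy underlying the model is a standard left-corner parse of a binary constituency grammar. As a non-neural point of reference (not the spiking implementation described in the paper), a minimal backtracking left-corner parser might look like the sketch below; the grammar, lexicon, and example sentence are illustrative assumptions.

```python
# Minimal left-corner parser for a binary constituency grammar.
# The grammar, lexicon, and sentence are toy assumptions for illustration.

GRAMMAR = {            # rules: parent -> (left_child, right_child)
    "S":  [("NP", "VP")],
    "NP": [("Det", "N")],
    "VP": [("V", "NP")],
}
LEXICON = {            # terminal category of each word
    "the": "Det", "a": "Det",
    "dog": "N", "cat": "N",
    "chased": "V", "saw": "V",
}

def parse(goal, words):
    """Recognize `goal` as a prefix of `words`; return (tree, remaining) or None."""
    if not words:
        return None
    # Bottom-up step: shift the next word as the left corner.
    word = words[0]
    corner = (LEXICON[word], word)           # (category, subtree)
    return complete(goal, corner, words[1:])

def complete(goal, corner, rest):
    """Project `corner` upward until it satisfies `goal` (left-corner projection)."""
    cat, tree = corner
    if cat == goal:
        return tree, rest
    # Top-down prediction: pick a rule whose left corner is `cat`,
    # parse the predicted right sibling, then keep projecting.
    for parent, rules in GRAMMAR.items():
        for left, right in rules:
            if left == cat:
                attempt = parse(right, rest)
                if attempt is not None:
                    right_tree, remaining = attempt
                    result = complete(goal, (parent, (parent, tree, right_tree)), remaining)
                    if result is not None:
                        return result
    return None

tree, leftover = parse("S", "the dog chased a cat".split())
assert not leftover
print(tree)   # ('S', ('NP', 'the', 'dog'), ('VP', 'chased', ('NP', 'a', 'cat')))
```

A fuller parser would also consider continuing to project a corner even when it already matches the goal; this sketch accepts the first match, which is sufficient for the toy grammar above.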
Similar resources
Improving the Izhikevich Model Based on Rat Basolateral Amygdala and Hippocampus Neurons, and Recognizing Their Possible Firing Patterns
Introduction: Identifying the potential firing patterns of different brain regions under normal and abnormal conditions increases our understanding of events at the level of neural interactions in the brain. Furthermore, it is important to be able to model potential neural activity in order to build precise artificial neural networks. The Izhikevich model is one of the simplest biolog...
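For context, the two-variable Izhikevich (2003) model referred to above can be simulated with simple Euler integration; the sketch below uses the standard regular-spiking parameters, while the input current, duration, and time step are arbitrary illustrative choices.

```python
# Standard Izhikevich (2003) neuron, Euler-integrated.
# Regular-spiking parameters; the constant input current is an arbitrary choice.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt, T, I = 0.5, 1000.0, 10.0           # time step (ms), duration (ms), input

v, u = c, b * c                         # initial membrane potential and recovery
spikes = []
for step in range(int(T / dt)):
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                       # spike cutoff and reset
        spikes.append(step * dt)
        v, u = c, u + d

print(f"{len(spikes)} spikes in {T:.0f} ms "
      f"(mean rate {1000 * len(spikes) / T:.1f} Hz)")
```

Varying a, b, c, and d reproduces the different firing patterns (bursting, fast spiking, and so on) that the cited work sets out to recognize.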
Biologically Plausible, Human-scale Knowledge Representation
Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony (Shastri & Ajjanagadde), "mesh" binding (van der Velde & de Kamps), and conjunctive binding (Smolensky). Recent theoretical work has suggested that most of these methods will not scale well, that is, that they cannot encode stru...
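As a concrete point of comparison for the binding schemes listed above, Smolensky-style conjunctive (tensor product) binding can be sketched as an outer product of role and filler vectors; the dimensionality, role/filler names, and random-vector scheme below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64                                   # toy vector dimensionality

def rand_vec():
    v = rng.standard_normal(D)
    return v / np.linalg.norm(v)

# Roles and fillers as random unit vectors (approximately orthogonal in high D).
roles   = {name: rand_vec() for name in ("subject", "verb", "object")}
fillers = {name: rand_vec() for name in ("dog", "chased", "cat")}

# Conjunctive (tensor product) binding: one outer product per role/filler pair,
# superimposed by addition into a single structure.
structure = (np.outer(roles["subject"], fillers["dog"])
             + np.outer(roles["verb"],    fillers["chased"])
             + np.outer(roles["object"],  fillers["cat"]))

# Unbinding: project the structure onto a role vector, then pick the most
# similar filler.
probe = roles["object"] @ structure
best = max(fillers, key=lambda name: fillers[name] @ probe)
print(best)   # expected: "cat"
```

Each binding occupies a D×D matrix, so the representation grows quadratically with the vector dimensionality; that growth is one commonly cited aspect of the scaling concern the abstract raises.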
An Incremental Connectionist Phrase Structure Parser
This abstract outlines a parser implemented in a connectionist model of short-term memory and reasoning. This connectionist architecture, proposed by Shastri in [Shastri and Ajjanagadde, 1990], preserves the symbolic interpretation of the information it stores and manipulates, but does its computations with nodes that have roughly the same computational properties as neurons. The parser rec...
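The binding-through-synchrony idea behind that architecture can be illustrated without any neural detail: a role node and a filler node count as bound when they fire in the same phase slot of a global oscillation. The slot count, cycle count, and role/filler names below are illustrative assumptions, not parameters of the cited parser.

```python
from collections import defaultdict

N_PHASES = 5          # phase slots per oscillation cycle (illustrative capacity)
N_CYCLES = 3

# Dynamic bindings we want to represent: role -> filler.
bindings = {"agent": "dog", "patient": "cat"}

# Each active filler claims its own phase slot; synchrony capacity is limited
# by the number of distinguishable slots per cycle.
phase_of = {filler: i for i, filler in enumerate(set(bindings.values()))}
assert len(phase_of) <= N_PHASES, "too many simultaneous bindings for synchrony"

# Spike times as (cycle, phase): a filler fires in its own slot, and a role
# node fires in the slot of the filler it is bound to.
spikes = defaultdict(list)
for cycle in range(N_CYCLES):
    for filler, phase in phase_of.items():
        spikes[filler].append((cycle, phase))
    for role, filler in bindings.items():
        spikes[role].append((cycle, phase_of[filler]))

# Read the bindings back out: a role is bound to whichever filler it fires
# in synchrony with (same set of phase slots).
for role in bindings:
    for filler, phase in phase_of.items():
        if {p for _, p in spikes[role]} == {p for _, p in spikes[filler]}:
            print(role, "is bound to", filler)
```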
Lookahead In Deterministic Left-Corner Parsing
To support incremental interpretation, any model of human sentence processing must not only process the sentence incrementally, it must to some degree restrict the number of analyses which it produces for any sentence prefix. Deterministic parsing takes the extreme position that there can only be one analysis for any sentence prefix. Experiments with an incremental statistical parser show that ...
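One way to picture the role of lookahead in keeping a left-corner parser deterministic is to filter candidate projections by whether the next word can begin the predicted right sibling. The toy grammar, lexicon, and single word of lookahead below are illustrative assumptions, not the statistical parser described in the abstract.

```python
# Toy illustration of lookahead-driven rule selection in left-corner parsing:
# a rule Parent -> corner Right is kept only if the lookahead word can begin
# Right. Grammar and lexicon are assumptions for this sketch.
GRAMMAR = {
    "S":  [("NP", "VP")],
    "NP": [("NP", "PP"), ("Det", "N")],
    "VP": [("V", "NP")],
    "PP": [("P", "NP")],
}
LEXICON = {"the": "Det", "dog": "N", "park": "N",
           "chased": "V", "in": "P"}

def first_terminal_cats(cat, seen=frozenset()):
    """Terminal categories that can start a constituent of category `cat`."""
    if cat in LEXICON.values():
        return {cat}
    firsts = set()
    for left, _right in GRAMMAR.get(cat, []):
        if left not in seen:
            firsts |= first_terminal_cats(left, seen | {cat})
    return firsts

def viable_projections(corner_cat, lookahead_word):
    """Rules whose left corner is `corner_cat` and whose right sibling is
    compatible with the lookahead word."""
    la_cat = LEXICON[lookahead_word]
    return [(parent, right)
            for parent, rules in GRAMMAR.items()
            for left, right in rules
            if left == corner_cat and la_cat in first_terminal_cats(right)]

# An NP corner ("the dog") could project to S or to a larger NP; one word of
# lookahead decides which, leaving a single analysis for the prefix.
print(viable_projections("NP", "chased"))   # [('S', 'VP')]
print(viable_projections("NP", "in"))       # [('NP', 'PP')]
```

When the lookahead still leaves more than one viable projection, the grammar is not deterministic for that prefix, which is exactly the situation the cited experiments quantify.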
SpikeProp: backpropagation for networks of spiking neurons
For a network of spiking neurons with reasonable postsynaptic potentials, we derive a supervised learning rule akin to traditional error backpropagation, SpikeProp, and show how to overcome the discontinuities introduced by thresholding. Using this learning algorithm, we demonstrate how networks of spiking neurons with biologically plausible time constants can perform complex non-linear classif...
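SpikeProp's setup can be sketched at the level of its forward pass and error: alpha-shaped postsynaptic potentials are summed, the output neuron fires at the first threshold crossing, and the cost is the squared difference between actual and desired spike times. The sizes, time constants, and target time below are illustrative, and the analytic weight-update rule derived in the paper is not reproduced here.

```python
import numpy as np

# Forward pass of the spike-response setup SpikeProp assumes: each input spike
# contributes an alpha-shaped PSP; the output neuron fires at the first
# threshold crossing. Parameter values are illustrative assumptions.
TAU, THRESH, DT = 7.0, 1.0, 0.1          # PSP time constant (ms), threshold, step (ms)

def eps(t):
    """Alpha-shaped PSP kernel, zero for t <= 0."""
    return np.where(t > 0, (t / TAU) * np.exp(1 - t / TAU), 0.0)

def output_spike_time(input_times, weights, t_max=40.0):
    """First time the weighted sum of PSPs crosses threshold (None if never)."""
    ts = np.arange(0.0, t_max, DT)
    potential = sum(w * eps(ts - t_i) for w, t_i in zip(weights, input_times))
    crossed = np.nonzero(potential >= THRESH)[0]
    return ts[crossed[0]] if crossed.size else None

input_times = np.array([0.0, 2.0, 4.0])  # presynaptic spike times (ms)
weights     = np.array([0.6, 0.5, 0.4])
t_desired   = 9.0                        # target output spike time (ms)

t_actual = output_spike_time(input_times, weights)
error = 0.5 * (t_actual - t_desired) ** 2   # squared spike-time error
print(f"output spike at {t_actual:.1f} ms, error {error:.3f}")
```

The learning rule in the paper descends this error with respect to the weights by linearizing the membrane potential around the spike time, which is how it gets past the threshold discontinuity.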
Publication date: 2014